perm filename GEOM2[0,BGB] blob sn#116881 filedate 1974-08-30 generic text, type C, neo UTF8
COMMENT ⊗   VALID 00009 PAGES
C REC  PAGE   DESCRIPTION
C00001 00001
C00002 00002	{⊂C<NF3αGEOMED.λ30P37I100,0JUFA}
C00006 00003	
C00012 00004	
C00018 00005		<Homogeneous Coordinates>.  The Euclidean routines  involving
C00024 00006		<Metric Routines>. Given  one or several  geometric entities,
C00031 00007	⊂3.4	Image Synthesis: Perspective Projection and Clipping.⊃
C00035 00008	
C00038 00009	⊂3.5	Image Analysis: Interface to CRE.⊃
C00041 ENDMK
C⊗;
{⊂C;<N;F3;αGEOMED.;λ30;P37;I100,0;JUFA}
⊂3.3	Euclidean Routines.⊃

	The Euclidean routines of GEOMED fall roughly into four groups:
transformations,   metrics,  tram routines  and space simulators. The
Euclidean transformations are  translation,   rotation, dilation  and
reflection following Klein's Erlangen Program,  1872. The Euclidean
metric  routines  compute distances,    angles,   areas,  volumes and
inertia tensors. The tram  routines create or alter tram  nodes which
are the main  topic of this section. The  final group of routines
perform  spatial   simulations  such   as  collision,   intersection,
propinquity, occupancy and occultation.

	<Tram  Nodes>. A  tram  node  contains twelve  real  numbers.
Fundamental  to all the Euclidean  routines is the  curious fact that
tram nodes have two  interpretations: they may represent a  coordinate
system  or  they may  represent  a  Euclidean transformation.  ~As  a
coordinate  system~,   the twelve numbers  contain a  location of the
origin of the  coordinate system as well  as the three components  of
each of the three unit vectors of the axes of the coordinate system.
~As a transformation~, the  application of a tram node to  a vertex is
defined  by the  procedure named SCREW, given below.{
λ10;T100,600,700,800,930;JA}
~Tram as a Coordinate System:~ {I∂0,540;} <Tram Node Data Field Names>
	location of origin of coordinates:	XWC,	YWC,	ZWC,	LOCATION VECTOR.
	components of X-axis unit vector:	IX,	IY,	IZ,
	components of Y-axis unit vector:	JX,	JY,	JZ,	ORIENTATION MATRIX.
	components of Z-axis unit vector:	KX,	KY,	KZ.{T-1;}

~Tram as a Transformation:~{W100;JAF3}
COMMENT APPLY TRAM Q TO VERTEX V POSTFIX;
PROCEDURE SCREW (INTEGER V,Q);
BEGIN	REAL X,Y,Z;
	X ← XWC(V);	Y ← YWC(V);	Z ← ZWC(V);
	XWC(V) ← X*IX(Q) + Y*JX(Q) + Z*KX(Q) + XWC(Q);
	YWC(V) ← X*IY(Q) + Y*JY(Q) + Z*KY(Q) + YWC(Q);
	ZWC(V) ← X*IZ(Q) + Y*JZ(Q) + Z*KZ(Q) + ZWC(Q);
END;{λ30;W0;JUFA}
\Generalizing, the  procedure APTRAM(ENTITY,TRAM) applies a tram  to an
arbitrary entity.  The APTRAM procedure is  formed by surrounding the
SCREW  procedure with  suitable  type  checking  and  data  structure
tracing mechanisms so that a tram  can be applied (postfix) to almost
anything: bodies, faces, edges,  vertices, as well as to other trams,
camera models and window nodes.{Q}

{I130,0;}	To repeat for emphasis, a tram  node has two interpretations;
a  tram node may  be interpreted as  a coordinate system  and the very
same tram node may be interpreted as a Euclidean transformation.
A  source of  confusion is  that  a coordinate  system  tram is  a
definition of  one coordinate system (call it the body coordinates) in
terms of another  coordinate system (call  it the world  coordinates).
The application of a body coordinate system tram to an entity in body
coordinates brings the entity down  into the world coordinate  system
in which the tram is defined. To say it another way, the rule is that
APTRAM(BODY,TRAM)   converts   from   body   coordinates   to   world
coordinates,    whereas   APTRAM(BODY,INTRAM(TRAM))  converts   world
coordinates to body coordinates. The  procedure INTRAM inverts a tram
node  in the manner  given below. As  alluded to in  example #2, body
nodes carry a pointer to a tram defining a system of body coordinates
so that  Euclidean transformations can be relocated
relative to arbitrary coordinate systems.{W200;λ10;JAF3}
INTEGER PROCEDURE INTRAM (INTEGER Q);
BEGIN "INTRAM"
	REAL X,Y,Z;
	X ← XWC(Q);	Y ← YWC(Q);	Z ← ZWC(Q);
	XWC(Q) ← -(X*IX(Q) + Y*IY(Q) + Z*IZ(Q));
	YWC(Q) ← -(X*JX(Q) + Y*JY(Q) + Z*JZ(Q));
	ZWC(Q) ← -(X*KX(Q) + Y*KY(Q) + Z*KZ(Q));
	IY(Q) ↔ JX(Q);	IZ(Q) ↔ KX(Q);	JZ(Q) ↔ KY(Q);	COMMENT TRANSPOSE;
	RETURN(Q);
END "INTRAM";{W0;λ15;JUFA}
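The pair of procedures above can be exercised together. The following is a minimal sketch in modern Python (dicts stand in for GEOMED nodes; the example tram is mine, not from GEOMED) showing that applying a tram and then its inverse returns a vertex to its original locus, i.e. that APTRAM(V,TRAM) followed by APTRAM(V,INTRAM(TRAM)) is the identity.

```python
# Python transcription of SCREW and INTRAM; field names follow the tram
# node layout above.  Not GEOMED source -- an illustrative sketch only.

def screw(v, q):
    """Apply tram q to vertex v (postfix), as in procedure SCREW."""
    x, y, z = v["XWC"], v["YWC"], v["ZWC"]
    v["XWC"] = x*q["IX"] + y*q["JX"] + z*q["KX"] + q["XWC"]
    v["YWC"] = x*q["IY"] + y*q["JY"] + z*q["KY"] + q["YWC"]
    v["ZWC"] = x*q["IZ"] + y*q["JZ"] + z*q["KZ"] + q["ZWC"]
    return v

def intram(q):
    """Invert tram q in place, as in procedure INTRAM."""
    x, y, z = q["XWC"], q["YWC"], q["ZWC"]
    q["XWC"] = -(x*q["IX"] + y*q["IY"] + z*q["IZ"])
    q["YWC"] = -(x*q["JX"] + y*q["JY"] + z*q["JZ"])
    q["ZWC"] = -(x*q["KX"] + y*q["KY"] + z*q["KZ"])
    q["IY"], q["JX"] = q["JX"], q["IY"]   # transpose the
    q["IZ"], q["KX"] = q["KX"], q["IZ"]   # orientation
    q["JZ"], q["KY"] = q["KY"], q["JZ"]   # matrix
    return q

# Example tram: 90-degree rotation about Z plus translation (1, 2, 3).
tram = dict(XWC=1.0, YWC=2.0, ZWC=3.0,
            IX=0.0,  IY=1.0,  IZ=0.0,
            JX=-1.0, JY=0.0,  JZ=0.0,
            KX=0.0,  KY=0.0,  KZ=1.0)
v = dict(XWC=5.0, YWC=0.0, ZWC=0.0)
screw(v, tram)            # body coordinates down into world coordinates
screw(v, intram(tram))    # world back to body: (5, 0, 0) again
```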
{|;λ10;JAFA}
BOX 3.3 {JC} EUCLIDEAN TRANSFORMATIONS {I∂15,0;T150,300,350;}
	ENTITY	←	APTRAM(ENTITY,TRAM);
	TRAM	←	INTRAM(TRAM);
	RESULT	←	TRANSL(XWD(TRAM,ENTITY),DX,DY,DZ);
	RESULT	←	ROTATE(XWD(TRAM,ENTITY),WX,WY,WZ);
	RESULT	←	SHRINK(XWD(TRAM,ENTITY),SX,SY,SZ);
{|;T-1;λ30;JU;FA}
	Pragmatically, the creation, relocation and  application of a
tram  node  are  invoked all  at  once  by  an appropriate  Euclidean
transformation routine. The transformation routines are listed in Box
3.3 with  APTRAM and  INTRAM.   As a further  pragmatic device,   the
first  argument  of  the Euclideans  is  "microcoded"  using  the XWD
notation  which packs  two  links  into  one word.    The  expression
XWD(A,B)  is equivalent to  the expression  (A*2↑18 + (B  MOD 2↑18)),
where A and  B are positive  integers. When the  entity of the  first
argument of  the  Euclidean routines  is zero,   the  transformations
create and return  a tram node; when the entity of the first argument
is nonzero,   the transformations  create a  tram,  apply  it to  the
entity,  kill  the tram node and  return the entity.   When the first
argument carries a tram as well as an entity (using the XWD notation)
the desired transformation (or creation) is done with  respect to the
coordinate  system  defined  in the  given  tram,    (this is  called
coordinate relocation).  When the first argument is negative the body
coordinates  tram  is  retrieved  and  used  for  relocation  of  the
transformation.  Most  bodies carry a tram pointer (in the link field
named TRAM) which defines body coordinates; the body coordinates of a
face, edge or  vertex are taken as  the TRAM of the BGET  of the face,
edge  or vertex; a  zero TRAM link  is mapped into  a zero translation,
unit rotation matrix tram by all the Euclidean routines. Finally, the
actual transformation  is specified by  giving three components  of a
vector; the meaning  of a  translation vector is  obvious,   rotation
vectors are explained in a subsequent paragraph and a scale vector is
a  triple  of factors  which  are multiplied  into  the corresponding
components of all the vertices of an entity with respect to  the axes
of transformation.  Reflections are specified as  negative shrinks; a
reflection  on  one or  on  three axes  will evert  a  body's surface
orientation.
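The XWD "microcoding" of the first argument can be sketched directly from the definition just given; the pack and unpack functions below are illustrative Python, not GEOMED source (the half-word accessors are my own names).

```python
# XWD notation: two 18-bit links packed into one 36-bit PDP-10 word,
# per the definition XWD(A,B) = A*2^18 + (B MOD 2^18).

def xwd(a, b):
    """Pack link a into the left half, link b into the right half."""
    return a * 2**18 + (b % 2**18)

def left_half(w):      # e.g. the tram link of a Euclidean routine argument
    return w >> 18

def right_half(w):     # e.g. the entity link
    return w & (2**18 - 1)

w = xwd(3, 5)
# left half carries the relocation tram, right half the entity
```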

	Further routines to create and  alter tram nodes are listed
in Box 3.4. The MKTRAM routine simply returns an
identity tram with zero translation and zero rotation (that is a unit
rotation matrix).  The MKTRMA routine  creates a tram from  the Euler
angles pan,  tilt and swing; see
(Goldstein 1950).  The Euler  angles  come conveniently  close to  the
rotational degrees of freedom  of automatic camera mounts, but unlike a
rotation vector  the Euler  angles  are discontinuous  at
zenith and nadir.{|;λ10;T200,700;JAFA}
BOX 3.4 {JC} TRAM ROUTINES {I∂15,0;}
	TRAM ← MKTRAM;	Returns an identity tram.
	TRAM ← MKTRMA(PAN,TILT,SWING);	Makes a tram from Euler angles.
	TRAM ← MKTRMF(FACE);	Makes a tram from a Face.
	TRAM ← MKTRME(EDGE);	Makes a tram from an Edge.
	TRAM ← MKTRMV(WX,WY,WZ);	Makes a tram from a rotation vector.
	TRAM ← NORM(TRAM);	Normalization to unit vectors.
	TRAM ← ORTHO1(TRAM);	Orthogonalize by worst case.
	TRAM ← ORTHO2(TRAM);	Orthogonalize by two cross products:
		 K ← (I CROSS J) and J ← (K CROSS I).
{|;λ30;T-1;JUFA}
	<The Rotation  Matrix>. The nine  elements named IX,  IY, IZ,
JX,  JY, JZ, KX,  KY and  KZ form what  is known  as a three  by three
rotation  matrix.  By  virtue  of  the  definition  of  rigid  object
rotation, the  tram rotation  matrix must be  maintained orthonormal.
(The  trams created by SHRINK  are tolerated as  a special case which
are not considered to be rigid rotations.) Orthonormality is maintained
with  the aid of  three routines:  NORM(TRAM) which  normalizes the
row vectors of a  tram rotation matrix;  ORTHO1 which orthogonalizes  a
rotation matrix  by comparing the  sums of pairs  of dot  products of
pairs of the three  unit vectors; the unit vector that is most out of
alignment is recomputed by  crossing the other two (ORTHO1  performs
its  check  twice   and  then  exits);  and   ORTHO2,  which  coerces
orthogonality by  setting row vector K to the cross product of rows I
and J, followed by setting row vector J to the cross product of  rows K
and I.
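ORTHO2's two cross products can be sketched in a few lines; the Python below (tuples standing in for tram rows, not GEOMED source) shows a slightly skewed rotation matrix being coerced back to an orthonormal right-handed triple.

```python
import math

# Sketch of NORM and ORTHO2 on row vectors I, J, K of a tram rotation
# matrix: K <- (I CROSS J), then J <- (K CROSS I).

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def unit(a):
    m = math.sqrt(a[0]**2 + a[1]**2 + a[2]**2)
    return (a[0]/m, a[1]/m, a[2]/m)

def ortho2(i, j, k):
    """Coerce rows I, J, K to an orthonormal right-handed triple."""
    i = unit(i)
    k = unit(cross(i, j))   # K <- I x J
    j = cross(k, i)         # J <- K x I, already unit length
    return i, j, k

# A skewed J row snaps back to the Y axis:
i, j, k = ortho2((1.0, 0.0, 0.0), (0.1, 1.0, 0.0), (0.0, 0.0, 1.0))
```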

	<The Rotation Vector>. All 3-D rotations  can be expressed as
a  vector where  the direction  of the vector  specifies the  axis of
rotation and where the magnitude  of the vector specifies the  amount
of rotation in radians.  Given such a rotation vector WX, WY, WZ with
direction  cosines CX,  CY, CZ and magnitude  W in radians; let CW be
cosine(W) and SW be sine(W);  and let a function called  SIGN
return positive or negative one depending on whether its argument is
positive or negative; then the relation between a rotation matrix and
a rotation vector can be listed:{λ10;T10,430,850;JA;FA}
~Rotation vector to Rotation matrix:~
	IX = (1-CW)*CX*CX + CW;	IY = (1-CW)*CY*CX + CZ*SW;	IZ = (1-CW)*CZ*CX - CY*SW;
	JX = (1-CW)*CX*CY - CZ*SW;	JY = (1-CW)*CY*CY + CW;	JZ = (1-CW)*CZ*CY + CX*SW;
	KX = (1-CW)*CX*CZ + CY*SW;	KY = (1-CW)*CY*CZ - CX*SW;	KZ = (1-CW)*CZ*CZ + CW;
{λ10;T40,100,150;}
~Rotation matrix to Rotation vector:~
	WX	=	SIGN(JZ-KY)*ACOS(0.5*(IX+JY+KZ-1))*SQRT((1+IX-JY-KZ)/(3-IX-JY-KZ));
	WY	=	SIGN(KX-IZ)*ACOS(0.5*(IX+JY+KZ-1))*SQRT((1-IX+JY-KZ)/(3-IX-JY-KZ));
	WZ	=	SIGN(IY-JX)*ACOS(0.5*(IX+JY+KZ-1))*SQRT((1-IX-JY+KZ)/(3-IX-JY-KZ));{
I∂-15,0;λ30;T-1;JUFA}
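Both conversions can be checked by a round trip: a rotation vector taken to a matrix and back should be recovered exactly. The Python sketch below transcribes the two relations (dicts keyed by the field names IX..KZ stand in for tram nodes; not GEOMED source).

```python
import math

# Rotation vector <-> rotation matrix, transcribing the relations above.

def vec_to_mat(wx, wy, wz):
    w = math.sqrt(wx*wx + wy*wy + wz*wz)       # magnitude W in radians
    cx, cy, cz = wx/w, wy/w, wz/w              # direction cosines
    cw, sw = math.cos(w), math.sin(w)
    return dict(
        IX=(1-cw)*cx*cx + cw,    IY=(1-cw)*cy*cx + cz*sw, IZ=(1-cw)*cz*cx - cy*sw,
        JX=(1-cw)*cx*cy - cz*sw, JY=(1-cw)*cy*cy + cw,    JZ=(1-cw)*cz*cy + cx*sw,
        KX=(1-cw)*cx*cz + cy*sw, KY=(1-cw)*cy*cz - cx*sw, KZ=(1-cw)*cz*cz + cw)

def mat_to_vec(m):
    t = m["IX"] + m["JY"] + m["KZ"]            # trace = 1 + 2*cos(W)
    w = math.acos(0.5*(t - 1))
    sign = lambda v: 1.0 if v >= 0 else -1.0
    wx = sign(m["JZ"]-m["KY"]) * w * math.sqrt((1+m["IX"]-m["JY"]-m["KZ"]) / (3-t))
    wy = sign(m["KX"]-m["IZ"]) * w * math.sqrt((1-m["IX"]+m["JY"]-m["KZ"]) / (3-t))
    wz = sign(m["IY"]-m["JX"]) * w * math.sqrt((1-m["IX"]-m["JY"]+m["KZ"]) / (3-t))
    return wx, wy, wz

# Round trip recovers the rotation vector:
wx, wy, wz = mat_to_vec(vec_to_mat(0.3, -0.4, 1.2))
```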

	<Homogeneous Coordinates>.  The Euclidean routines  involving
trams  could  be   written  out  in  terms  of  the  4-D  homogeneous
coordinates frequently  found in  computer graphics,  by prefixing  a
column to each tram and a fourth component to each vertex.{λ10;I∂-10,0;
T200,400,600,800,1000;JAFA}
		1	XWC	YWC	ZWC
	{I∂18,∂0;}TRAM4D ={I∂-18,∂0;}	0	IX	IY	IZ
		0	JX	JY	JZ
		0	KX	KY	KZ{λ30;T-1;JUFA}
\I did not use homogeneous coordinates in GEOMED  for three  reasons:
first, the computer at hand (a PDP-10) has floating point arithmetic
hardware so that homogeneous components were not needed for  numerical
scaling;  second,    the  homogeneous  representation  requires  more
coordinates  per vertex  and more  multiplications per transformation
than the GEOMED representation;  and third, my intuition  is stronger
in  affine  metric geometry  than  it  is  in homogeneous  projective
geometry.
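The equivalence of the two representations can be verified numerically: postfixing the row vector (1, x, y, z) to TRAM4D reproduces SCREW exactly. The Python sketch below uses plain lists for the matrix and my own 90-degree example tram (not from GEOMED).

```python
# Row vector times 4x4 homogeneous tram, postfix convention:
# (1, x, y, z) * TRAM4D = (1, x', y', z') where (x', y', z') is the
# SCREW result for the same tram.

def mat_vec_postfix(row, m):
    return [sum(row[i]*m[i][j] for i in range(4)) for j in range(4)]

# Tram: 90-degree rotation about Z, translation (1, 2, 3).
tram4d = [[1.0,  1.0, 2.0, 3.0],   # 1  XWC YWC ZWC
          [0.0,  0.0, 1.0, 0.0],   # 0  IX  IY  IZ
          [0.0, -1.0, 0.0, 0.0],   # 0  JX  JY  JZ
          [0.0,  0.0, 0.0, 1.0]]   # 0  KX  KY  KZ

h = mat_vec_postfix([1.0, 5.0, 0.0, 0.0], tram4d)
# h[1:] is the transformed vertex (5, 0, 0) -> (1, 7, 3)
```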

	<Standard Conventions>. There  are
several nettlesome details related to  rotation,   translation  and
projection among  which a  computer geometer  must distinguish:  (i).
matrix   vs.     algebraic   notation;  (ii).   postfix  vs.   prefix
transformation application; (iii). row vs. column vertices; (iv). 4-D
homogeneous vs.  3-D  affine coordinates;  (v).  rotation vector  vs.
Euler angles and  so on. At the moment,   I favor algebraic notation,
postfix transformations, row vertices,  3-D coordinates and  rotation
specification by vector; a demonstrably
superior natural set of standard conventions
probably does not exist.

	In GEOMED, tram nodes were until recently called frame nodes,
however  I wish to  abandon all use  of the word  <frame> for three
reasons: first,  the  term is  ambiguous  and overused  (even  within
graphics alone);  second,  the  term does  not include the  notion of
transformation; and third,  the term risks confusion (or association)
with  the  connotations of  (Minsky  74)  and  (Winograd  74); i.e. the
connotation  of a  <Frame  System> as  a modular  mental  universe of
stereotyped world  situations.   In  geometric  modeling,   the  word
<frame>  can  be  replaced  in  all  three   of  its  usual  graphics
applications: the <frame of reference> or <coordinate frame> is now a
<coordinate system>,  the <frame> of a movie film is now  an <image>,
the <frame> of a display screen is now a <window> or <border>.

	<Metric Routines>. Given  one or several  geometric entities,
the  Euclidean  metric routines  listed  in Box  3.5  compute length,
area,  volume,   angle or  moments of  inertia. The DISTANCE  routine
computes the distance  between two anythings in  a reasonable manner;
the MEASURE routine  returns the volume,  area or  length of bodies,
faces  or edges  respectively  (by  a pragmatic  argument  hack,  the
measure of  a negative body is  its surface area).  The ANGLE routine
computes the angle between two  entities by returning the arc  cosine
of  the  normalized  inner  product  of   two  vectors:  vertices  are
interpreted  as vectors  from the  origin of the  body in  which they
belong, edges are vectors from their NVT to their PVT, faces are taken
as  their normal  vector and  bodies are  represented by  the K  unit
vector  of their  body coordinates tram;  trams and  cameras also are
mapped into K unit vectors.

{|;λ10;JAFA}
BOX 3.5 {JC} METRIC ROUTINES {I∂15,0;T200,400,600;}
	VALUE	←	DISTANCE(ENTITY,ENTITY);
	VALUE	←	MEASURE(ENTITY);
	RADIANS	←	ANGLE(ENTITY,ENTITY);
	RADIANS	←	ANGL3V(V1,V2,V3);
	RADIANS	←	ANGLCW(EDGE);
	RADIANS	←	ANGLCCW(EDGE);
	VALUE	←	DETERM(TRAM);
	NODE	←	INERTIA(BODY);
{|;λ30;T-1;JUFA}
\Since the arc cosine function returns an angular value
between  zero and pi,  the routines ANGL3V,   ANGLCW  and ANGLCCW
employ the arc tangent to compute an angular value between negative pi
and positive  pi.   The ANGL3V  returns the  angle between  the vector
(V3-V2)  and (V2-V1), the  ANGLCW(E) returns the angle  between E and
PCW(E), ANGLCW(-E)  returns the arctan of  E and  NCW(E); likewise  ANGLCCW
returns values for E and PCCW(E)  or E and NCCW(E). The DETERM of a tram is
the determinant of  the rotation  matrix of a  tram.   Finally,   the
INERTIA of a body is a sextuple: MXX, MYY, MZZ, PXY, PXZ, PYZ packed
into the  first six words of a node  and representing the moments and
products of the  inertia tensor  of a polyhedron  of uniform  (unit)
density associated with the given body. The inertia routine takes the
liberty  of   updating  the  origin  of  the  body  coordinates  to
correspond to the center of  mass and to orient the K unit  vector of
the body parallel to the principal axis of inertia.
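The arc-tangent device behind ANGL3V can be sketched with atan2; the Python below is my own reconstruction (not GEOMED source, and the sign convention shown is an assumption) of a signed 2-D angle in (-pi, +pi] rather than the [0, pi] range of the arc cosine.

```python
import math

# Signed angle between 2-D vectors via atan2 of cross and dot products.
# Hypothetical reconstruction of the ANGL3V device, not GEOMED source.

def signed_angle(ax, ay, bx, by):
    return math.atan2(ax*by - ay*bx, ax*bx + ay*by)

def angl3v(v1, v2, v3):
    """Angle turned at v2, between (V2-V1) and (V3-V2)."""
    return signed_angle(v2[0]-v1[0], v2[1]-v1[1],
                        v3[0]-v2[0], v3[1]-v2[1])

# A left turn and a right turn differ in sign, which acos cannot report:
left  = angl3v((0, 0), (1, 0), (1,  1))   # +pi/2
right = angl3v((0, 0), (1, 0), (1, -1))   # -pi/2
```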

	<Spatial Simulation>. The difficult space routines perform
occultation and intersection and are explained in Chapters 4 and 5
respectively. The simple space routines, listed in Box 3.6, perform
propinquity, collision detection and spatial compare.{|;λ10;JAFA}
BOX 3.6 {JC} SIMPLE SPACE ROUTINES {I∂15,0;T200,400,600,800;}
	HEXAHEDRON	←	MKBUCK(BODY);
	V-PIERCE	←	COMPFE(FACE,EDGE);
	FLAG	←	COMPEE(EDGE,EDGE);
	FLAG	←	WITH2D(FACE,VERTEX);
	FLAG	←	WITH3D(BODY,VERTEX);
	FLAG	←	COLDET(B1,B2,EPSILON).
{|;λ30;T-1;JUFA}
\The MKBUCK routine returns a hexahedron that buckets the given body.
The  COMPFE compares a face  and an edge in  3-D for intersection: if
the arguments are disjoint then  zero is returned; if the  arguments
intersect then the edge is split  and the new vertex is positioned at
the  locus  where  the  edge pierces  the  face.  The  COMPEE routine
determines whether two edges  cross in a given perspective  view. The
within 2-D  routine, WITH2D,  determines whether  a vertex appears to
be interior to  a given  face in a  perspective view  and the  WITH3D
determines whether a  given vertex falls  interior to a body  in 3-D.
The COLDET routine  compares all the  vertices and  faces of two
of the two objects for propinquity within an  epsilon as well as all the  edges
of the  two objects.  Temporary collision  pointers are  left between
vertices  and the  nearest alien  collision face  as well  as between
temporary collision  vertices. Collision vertices  are formed at  the
foot of the shortest line segment between the skew lines of two edges
that pass within the epsilon distance of each other.
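The feet of that shortest segment between two skew edge lines can be computed in closed form; the Python sketch below (my own parameterization, not GEOMED source) shows where COLDET's collision vertices would be placed.

```python
import math

# Feet of the common perpendicular of two skew lines p1 + t*d1 and
# p2 + s*d2: the points where collision vertices would be formed.

def closest_feet(p1, d1, p2, d2):
    def dot(a, b): return sum(x*y for x, y in zip(a, b))
    def sub(a, b): return [x - y for x, y in zip(a, b)]
    r = sub(p1, p2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, r), dot(d2, r)
    denom = a*c - b*b                 # zero only for parallel lines
    t = (b*e - c*d) / denom
    s = (a*e - b*d) / denom
    f1 = [p + t*u for p, u in zip(p1, d1)]
    f2 = [p + s*u for p, u in zip(p2, d2)]
    return f1, f2

# Two skew edges passing within distance 1 of each other:
f1, f2 = closest_feet([0, 0, 0], [1, 0, 0], [0, 0, 1], [0, 1, 0])
gap = math.dist(f1, f2)               # compare against the epsilon
```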

⊂3.4	Image Synthesis: Perspective Projection and Clipping.⊃

	Image synthesis is the process of generating various kinds of
images: vector display,  video, contour map or mosaic. Independent of
the  final image  representation  the  process  always  requires  the
operations of  perspective  projection  and clipping.  The
perspective projection takes the 3-D world locus of every potentially
visible vertex  and computes  a  3-D camera  center coordinate  locus
followed by  a perspective projection  in the fashion  illustrated in
the PROJECT procedure given below.{λ10;W100;JAF3;}
INTEGER PROCEDURE PROJECT (INTEGER V,CAMERA);
BEGIN "PROJECT"
	INTEGER TRM; REAL X,Y,Z,XCC,YCC,ZCC;
COMMENT TRANSFORM FROM WORLD COORDINATES TO CAMERA COORDINATES;
	TRM ← TRAM(CAMERA);
	X ← XWC(V) - XWC(TRM);
	Y ← YWC(V) - YWC(TRM);
	Z ← ZWC(V) - ZWC(TRM);
	XCC ← X*IX(TRM) + Y*IY(TRM) + Z*IZ(TRM);
	YCC ← X*JX(TRM) + Y*JY(TRM) + Z*JZ(TRM);
	ZCC ← X*KX(TRM) + Y*KY(TRM) + Z*KZ(TRM);
COMMENT PERSPECTIVE PROJECTION TRANSFORMATION;
COMMENT NOTA BENE: ZPP(V) is positive when vertex is in view of camera ! ;
	XPP(V) ← SCALEX(CAMERA)*XCC/ZCC;	COMMENT ( SCALEX = -FOCAL/PDX );
	YPP(V) ← SCALEY(CAMERA)*YCC/ZCC;	COMMENT ( SCALEY = -FOCAL/PDY );
	ZPP(V) ← SCALEZ(CAMERA)    /ZCC;	COMMENT ( SCALEZ = -FOCAL/PDZ );
	RETURN (V);
END "PROJECT";{W0;}

{λ30;JUFA}\The perspective projection transformation is  a 3-D
to  3-D  mapping; the  third  component, ZPP, allows  the  hidden line
eliminator to perform orthographic depth comparisons. The
perspective  projection quite  literally is taking  the whole  world
model  and crushing it  into a  slanty space between  the camera lens
center and the camera focal  plane. The camera scales are  defined in
terms of  the fictitious 3-D pixel  dimensions PDX, PDY,  PDZ and the
physical camera focal plane distance, FOCAL. The pixel dimensions are
arbitrarily defined as PDY=PDZ=40 microns and  PDX=AR*PDY where AR is
the  aspect ratio  of the  camera; the aspect  ratio can  be directly
measured by taking the ratio of the width to height of the image of a
large black sphere  on a white background, AR is  usually almost one.
The  focal plane distance is typically  between 10 and 50 millimeters
and is derived from the definition (FOCAL=FR*PDY)
of the focal ratio, FR, which  can be
simply measured as explained in Section 9.1.
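The camera scale definitions quoted in the PROJECT comments can be worked through numerically; the values of FR and AR below are illustrative stand-ins for measurements made as in Section 9.1.

```python
# Camera scales from the pixel dimensions and focal plane distance,
# per the definitions above.  FR and AR here are illustrative only.

PDY = PDZ = 40e-6          # pixel height and depth: 40 microns
AR = 1.05                  # aspect ratio, usually almost one
FR = 500.0                 # focal ratio (illustrative value)

PDX = AR * PDY             # pixel width
FOCAL = FR * PDY           # focal plane distance: 20 mm here,
                           # within the typical 10 to 50 mm range

SCALEX = -FOCAL / PDX
SCALEY = -FOCAL / PDY      # equals -FR by construction
SCALEZ = -FOCAL / PDZ
```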

	The term  clipping refers to  the process of  computing which
parts of  the world model are in view of  the camera. In GEOMED there
are several clipper routines: one for fast transparent refresh, three
for hidden line elimination and one more for clipping the contents of
2-D  display windows that  may be scrolled about.   Three dimensional
clipping can be  factored into  a Z-clipper and  an XY-clipper.   The
Z-clipper determines  which portions of the model  are in the visible
3-D halfspace and splits edges and faces that cross the focal  plane.
The XY-clipper determines which portion of a  2-D perspective edge is
within  a given 2-D  rectangular window  (with sides parallel  to the
coordinate axes).  The  XY-clip is  done  by  first applying  an  easy
outsider test:  endpoints of the  edge both below,   above,   left or
right  of the window; followed by an easy  insider test: endpoints of the
edge both  inside  the window;  followed by  the  evaluation of  four
polynomials  of   the  form  A*X+B*Y+C  where  A,B,C   are  the  edge
coefficients and X,Y are the locus  of corners of the clip window.  If
all four polynomials have the same sign the  edge is a hard outsider,
otherwise the  intersection of a side of the  window and the edge can
be detected from alternating signs and the locus of intersection can be
computed from the edge coefficients.
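The three stages of the XY-clip test can be sketched as follows; the Python is an illustrative reconstruction (function and result names are mine, not GEOMED's), with the window given as (xlo, ylo, xhi, yhi) and an edge as a pair of 2-D perspective endpoints.

```python
# Easy outsider, easy insider, then the four corner polynomials
# A*X + B*Y + C of the edge line, as described above.

def xy_clip_class(p, q, xlo, ylo, xhi, yhi):
    """Classify an edge: 'easy-out', 'easy-in', 'hard-out' or 'crossing'."""
    (x1, y1), (x2, y2) = p, q
    # easy outsider: both endpoints below, above, left or right of window
    if (x1 < xlo and x2 < xlo) or (x1 > xhi and x2 > xhi) \
       or (y1 < ylo and y2 < ylo) or (y1 > yhi and y2 > yhi):
        return "easy-out"
    # easy insider: both endpoints inside the window
    if xlo <= x1 <= xhi and ylo <= y1 <= yhi and \
       xlo <= x2 <= xhi and ylo <= y2 <= yhi:
        return "easy-in"
    # edge coefficients of the line A*X + B*Y + C = 0 through p and q
    a, b = y1 - y2, x2 - x1
    c = -(a*x1 + b*y1)
    signs = [a*x + b*y + c > 0
             for x, y in ((xlo, ylo), (xlo, yhi), (xhi, ylo), (xhi, yhi))]
    # all four polynomials the same sign: a hard outsider; otherwise the
    # alternating signs locate which window sides the edge crosses
    return "hard-out" if all(signs) or not any(signs) else "crossing"

r1 = xy_clip_class((2, 2), (3, 3), 0, 0, 10, 10)      # easy-in
r2 = xy_clip_class((-5, 2), (-1, 3), 0, 0, 10, 10)    # easy-out
r3 = xy_clip_class((-5, 0), (5, 10), 0, 0, 10, 10)    # crossing
r4 = xy_clip_class((-5, 8), (8, 21), 0, 0, 10, 10)    # hard-out
```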

⊂3.5	Image Analysis: Interface to CRE.⊃

	Although there are  no actual honest image  analysis routines
currently implemented in GEOMED,  the internal GEOMED environment was
designed for image based model synthesis and model  verification. The
routine INCRE(FILENAME) inputs from a  disk file a CRE node structure
that consists  of a film of contour images; contour images consist of
levels,  levels consist of polygons and polygons  consist of vectors.
In  GEOMED,   the CRE  polygons become  two-faced lamina  bodies; the
contour levels hierarchy becomes a parts tree structure; and a new  kind
of GEOMED node called an image is introduced.

	The root  of the GEOMED  data structure  is a universe  node,
which is  the head of a ring of world  nodes. World nodes have a ring
of body nodes  and a ring  of camera nodes;  each camera represents  a
physical camera, so  that there might be at most  three or four camera
nodes.   Each  camera has two  rings of  images: a  ring of perceived
images and  a corresponding  ring of  simulated  images. The  perceived
image  ring is  created  by INCRE  and  the simulated  image ring  is
created by the hidden line eliminator,  thus providing an  environment
for the development of polygon based image analysis.

	This completes  the general  description  of  the  geometric
modeling system called GEOMED.
{I∂400,630;H2;X0.6;*HORNY.PLT;}